Section: New Results

Higher level functions

Participants : Frédéric Alexandre, Laurent Bougrain, Octave Boussaton, Axel Hutt, Maxime Rio, Carolina Saavedra, Christian Weber.

Our activities concerned information analysis and interpretation and the design of numerical distributed and adaptive algorithms in interaction with biology and medical science. To better understand cortical signals, we chose a top-down approach in which data analysis techniques extract properties of the underlying neural activity. To this end, several unsupervised and supervised methods were investigated and integrated to extract features from measured brain signals. More specifically, we worked on Brain-Computer Interfaces (BCI).

Using Neuronal States for Transcribing Cortical Activity into Muscular Effort

We studied the relations between the activity of corticomotoneuronal (CM) cells and the forces exerted by the fingers. The activity of CM cells, located in the primary motor cortex, is recorded in the thumb and index finger area of a monkey. The activity of the fingers is recorded as they press two levers. The main idea of this work is to establish and use a collection of neuronal states. At any time, the neuronal state is defined by the firing rates of the recorded neurons. We assume that any such neuronal state is related to a typical variation (or absence of variation) of the muscular effort. Our forecasting model uses a linear combination of the firing rates, synchrony information between spike trains and averaged variations of the lever positions [17].
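
As an illustration, here is a minimal sketch, in Python with synthetic data and hypothetical variable names, of a linear forecasting model of this kind: lever-position variations are predicted from the instantaneous firing rates (the synchrony term of the actual model [17] is omitted).

import numpy as np

# Synthetic stand-in data: firing rates of n_cells recorded CM cells over
# T time bins, and the corresponding variations of the two lever positions
# (thumb and index).  Real recordings would replace these arrays.
rng = np.random.default_rng(0)
T, n_cells = 1000, 12
firing_rates = rng.poisson(5.0, size=(T, n_cells)).astype(float)   # spikes/bin
lever_delta = rng.normal(0.0, 1.0, size=(T, 2))                    # lever variations

# Fit a linear combination of the firing rates (least squares) that forecasts
# the variation of the lever positions in each time bin.
X = np.hstack([firing_rates, np.ones((T, 1))])      # add an intercept column
W, *_ = np.linalg.lstsq(X, lever_delta, rcond=None)

# Forecast the lever variation associated with a new "neuronal state",
# i.e. a vector of instantaneous firing rates.
state = firing_rates[-1]
predicted_delta = np.hstack([state, 1.0]) @ W
print(predicted_delta)      # predicted change of the thumb and index levers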

From the decoding of cortical activities to the control of a JACO robotic arm: a whole processing chain

We realized a complete processing chain that decodes intracranial data recorded in the cortex of a monkey and replicates the associated movements on a JACO robotic arm by Kinova. We developed specific modules inside the OpenViBE platform in order to build a Brain-Machine Interface able to read the data, compute the position of the robotic finger and send this position to the robotic arm. More precisely, two client/server protocols have been tested to transfer the finger positions: VRPN and a light protocol based on TCP/IP sockets. According to the requested finger position, the server calls the associated functions of an API by Kinova to move the fingers properly. Finally, we monitor the gap between the requested and actual finger positions. This chain can be generalized to any movement of the arm or wrist [22].
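
As a sketch of the lighter of the two transfer protocols, the following Python client sends decoded finger positions over a TCP/IP socket; the host, port and JSON message format are assumptions for illustration and do not reflect the actual OpenViBE modules or the Kinova API.

import json
import socket

HOST, PORT = "127.0.0.1", 5005      # assumed address of the arm-control server

def send_finger_positions(positions):
    """Send one set of decoded finger positions over a plain TCP/IP socket.

    The newline-terminated JSON message is purely illustrative; the real
    protocol is defined by the team's OpenViBE modules, and the server side
    calls the Kinova API to move the JACO fingers.
    """
    message = json.dumps({"fingers": positions}).encode("utf-8")
    with socket.create_connection((HOST, PORT), timeout=1.0) as sock:
        sock.sendall(message + b"\n")

if __name__ == "__main__":
    # Example: positions decoded from cortical activity for three fingers
    # (requires a server listening on HOST:PORT).
    send_finger_positions([12.5, 40.0, 37.2])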

Wavelet-based Semblance for P300 Single-trial Detection

Electroencephalographic signals are usually contaminated by noise and artifacts, making it difficult to detect Event-Related Potentials (ERPs), especially in single trials. Wavelet denoising has been successfully applied to ERP detection, but usually processes each channel independently. This work presents a new adaptive approach to denoising that takes into account the correlation between channels in the wavelet domain. Moreover, we combined phase and amplitude information in the wavelet domain to automatically select a temporal window that increases class separability. Results on a classic Brain-Computer Interface application, spelling characters using P300 detection, show that our algorithm achieves better accuracy than the VisuShrink wavelet technique and the XDAWN algorithm across 22 healthy subjects, and better regularity than XDAWN [21].
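
A minimal sketch of the idea of wavelet-domain semblance between two channels, assuming a complex Morlet continuous wavelet transform (PyWavelets): semblance is the cosine of the local phase difference, close to +1 where the channels are in phase. The actual method [21] also combines amplitude information and adapts the denoising accordingly.

import numpy as np
import pywt

def wavelet_semblance(x, y, scales, wavelet="cmor1.5-1.0", fs=256.0):
    """Semblance between two EEG channels in the wavelet domain.

    Semblance is the cosine of the local phase difference between the
    complex wavelet transforms of the two channels: close to +1 where the
    channels are in phase, -1 where they are in anti-phase.
    """
    cx, _ = pywt.cwt(x, scales, wavelet, sampling_period=1.0 / fs)
    cy, _ = pywt.cwt(y, scales, wavelet, sampling_period=1.0 / fs)
    return np.cos(np.angle(cx) - np.angle(cy))   # shape: (n_scales, n_samples)

# Toy example: two noisy channels sharing a 10 Hz component, a stand-in for
# correlated ERP activity across electrodes.
fs = 256.0
t = np.arange(0, 1.0, 1.0 / fs)
rng = np.random.default_rng(1)
ch1 = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)
ch2 = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)
S = wavelet_semblance(ch1, ch2, scales=np.arange(1, 64))
print(S.shape)      # (63, 256)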

Filter for P300 detection

According to the recent literature, the most appropriate preprocessing to improve P300 detection is still unknown, or at least there is no consensus about it. Research papers use different low-pass filters, high-pass filters, baseline corrections, subsampling rates or feature selections. Using a database of 23 healthy subjects, we compared the effect on letter accuracy (single-trial detection with a linear support vector machine) of high-pass filters with cutoff frequencies from 0.1 to 1 Hz combined with low-pass filters with cutoff frequencies from 8 to 60 Hz. According to this study, the best combination is a band-pass filter from 0.1 to 15 Hz [16].
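
A minimal sketch of the retained preprocessing, assuming a zero-phase Butterworth filter from SciPy; the filter order and sampling rate are illustrative choices, not those of the study [16].

import numpy as np
from scipy.signal import butter, filtfilt

def bandpass_filter(eeg, fs, low=0.1, high=15.0, order=4):
    """Zero-phase band-pass filter with the 0.1-15 Hz band retained above."""
    b, a = butter(order, [low / (fs / 2.0), high / (fs / 2.0)], btype="band")
    return filtfilt(b, a, eeg, axis=-1)

# Toy usage on two seconds of synthetic single-channel EEG sampled at 256 Hz;
# the filtered epochs would then feed a linear SVM for P300 detection.
fs = 256.0
rng = np.random.default_rng(2)
raw = rng.standard_normal(int(2 * fs))
filtered = bandpass_filter(raw, fs)
print(filtered.shape)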

Processing Stages of Visual Stimuli and Event-Related Potentials

Event-related potentials (ERPs) in electroencephalograms reflect various visual processing stages according to their latencies and locations. Thus, ERP components such as the N100, N170 and N200, which appear 100, 170 and 200 ms after the onset of a visual stimulus, correspond respectively to selective attention, to the processing of color, shape and rotation (e.g. the processing of human faces), and to the degree of attention [24].

Exploring the role of the thalamus in visuomotor tasks implicating non-standard ganglion cells

Non-standard ganglion cells in the retina have specific loci of projection in the visuomotor systems, particularly in the thalamus and the superior colliculus. In the thalamus, they feed the konio pathway of the LGN. Exploring the specificities of this pathway, we found that it could be associated with the matrix system of thalamo-cortical projections, which is known to allow for diffuse patterns of connectivity and to play a major role in the synchronization of cortical regions by the thalamus.

An early model [23] led to the design of the corresponding information flows in the thalamo-cortical system, which we are now expanding, in the framework of the Keops project (§8.2), to be applied to real visuomotor tasks.

Formalization of the input/output retinal transformation regarding non-standard ganglion cell behavior

We propose to implement the computational principles raised by the study of the retinal K-cells using a variational specification of the visual front-end, with an important consequence: in such a framework, the ganglion cells (GC) are not considered individually, but as a network, yielding a mesoscopic view of the retinal process.

Given natural image sequences, fast event-detection properties appear to be exhibited by the mesoscopic collective non-standard behavior of a subclass of the so-called dorsal and ventral konio-cells (K-cells), which correspond to specific retinal outputs.

We consider this visual event detection mechanism to be based on image segmentation and the recognition of specific natural statistics, including temporal pattern recognition, yielding fast region categorization. We discuss how such sophisticated functionalities could be implemented in biological tissue as a single generic two-layer non-linear filtering mechanism with feedback. We use computer vision methods to propose an effective link between the observed functions and their possible implementation in the retinal network.

The available computational architecture is a two-layer network with a non-separable local spatio-temporal convolution as input stage, and recurrent connections performing non-linear diffusion before prototype-based visual event detection.
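
A rough sketch of such an architecture, assuming a hand-made non-separable spatio-temporal kernel for the first layer and a Perona-Malik style non-linear diffusion for the recurrent stage; all kernel values and parameters are illustrative and do not reproduce the model of [25].

import numpy as np
from scipy.ndimage import convolve

def retina_like_filter(frames, n_iter=10, dt=0.2):
    """Illustrative two-layer filtering of an image sequence of shape (t, y, x).

    Layer 1: a non-separable local spatio-temporal convolution, here a
    temporal derivative of a spatial blur.
    Layer 2: a recurrent non-linear (Perona-Malik style) diffusion that
    smooths homogeneous regions while preserving salient transients; its
    output would feed a prototype-based event detector.
    """
    space = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], dtype=float) / 16.0
    kernel = np.stack([-space, np.zeros_like(space), space])    # (t, y, x)
    layer1 = convolve(frames, kernel, mode="nearest")

    u = layer1.copy()
    for _ in range(n_iter):
        gy, gx = np.gradient(u, axis=(1, 2))
        g = 1.0 / (1.0 + gy ** 2 + gx ** 2)       # edge-stopping function
        flux_y = np.gradient(g * gy, axis=1)
        flux_x = np.gradient(g * gx, axis=2)
        u = u + dt * (flux_y + flux_x)
    return u

# Toy input: 8 frames of 32x32 noise with a moving bright blob as the "event".
rng = np.random.default_rng(3)
frames = 0.1 * rng.standard_normal((8, 32, 32))
for k in range(8):
    frames[k, 10 + k, 10 + k] += 3.0
response = retina_like_filter(frames)
print(response.shape)       # (8, 32, 32)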

The numerical robustness of the proposed model has been experimentally checked on real natural images. Finally, model predictions to be verified at the biological level are discussed [25].